The $12 Million Question

A Fortune 500 energy company bet big on AI. They rolled out Microsoft Copilot to thousands of employees—a multi-million dollar commitment that was supposed to transform how their workforce operated.

Six months in, the CFO walked into a leadership meeting and asked a question nobody could answer: “What exactly are we getting for this money?”

The licenses were active. IT had done their job. But usage was flat, and those productivity gains from the business case? Still theoretical.

This wasn’t a technology failure. The software worked fine. It was something more frustrating: the organization owned a powerful tool but had no way to prove it was delivering value. Worse, they had no way to figure out why it wasn’t.

If you have spent enough time in finance, you know this story. You also know that data problems rarely appear all at once. They creep in quietly through one manual spreadsheet, one inconsistent KPI, one temporary reconciliation, and then another. Eventually, the business is making million-dollar decisions based on information no one fully trusts.

The Problem With Native Reporting

Microsoft’s built-in dashboards showed usage numbers. Fine. But those numbers raised more questions than they answered.

Were employees actually weaving Copilot into their daily work, or just poking at it occasionally out of curiosity? The dashboards couldn’t tell you. Which teams had figured it out, and which were floundering? No idea. What separated someone who’d transformed their workflow from someone who’d tried it twice and given up?

The data existed somewhere, presumably. But it was buried, aggregated, smoothed over into charts that looked fine in a slide deck but couldn’t drive any real decisions.

Leadership faced a lousy choice: keep pouring money into generic training programs and hope something clicked, or quietly accept that a chunk of their AI investment might never pay off.

And here’s what really stung—the opportunity cost. Somewhere in that workforce, there were productivity gains waiting to be unlocked. Real money. Real competitive advantage. But without visibility into what was actually happening, those gains might as well not exist.

Looking at the Problem Differently

When TechWish got involved, we started with a basic observation that seems obvious in hindsight: Departmental averages are almost useless for understanding adoption.

Think about it. “Finance Department” includes analysts buried in spreadsheets, controllers reviewing reports, strategists building models, and admins scheduling meetings. These people have completely different jobs. They’d use Copilot in completely different ways, if they used it at all. Lumping them together and looking at an average tells you almost nothing.

So we built something different. Our AI Adoption Intelligence Platform doesn’t just track whether people use Copilot. It maps usage against what we call Job Archetypes—behavioral profiles like The Researcher, The Coder, The Strategist, The Communicator. These cut across org chart lines to group people by how they actually work.

Suddenly you can see patterns. The Researchers in Legal are crushing it, but the Researchers in R&D haven’t touched it. Why? The Strategists across the company are struggling—turns out nobody’s shown them the features that matter for their workflow. Now you’ve got something actionable.
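To make the archetype idea concrete, here is a minimal sketch of a rule-based mapping from a user’s behavioral fingerprint to an archetype. The archetype names come from the description above; everything else (the feature names, the “dominant share” heuristic) is hypothetical and not the platform’s actual model.

```python
# Illustrative sketch only: feature names and the classification rule are
# assumptions, not TechWish's actual archetype model.

def assign_archetype(fingerprint: dict) -> str:
    """Map usage features to the archetype with the largest share of activity."""
    shares = {
        "The Coder": fingerprint.get("code_queries", 0),
        "The Researcher": fingerprint.get("search_queries", 0),
        "The Strategist": fingerprint.get("analysis_queries", 0),
        "The Communicator": fingerprint.get("drafting_queries", 0),
    }
    if sum(shares.values()) == 0:
        return "Unclassified"
    # Pick whichever archetype accounts for the most activity.
    return max(shares, key=shares.get)

print(assign_archetype({"search_queries": 40, "drafting_queries": 12}))
```

A real model would be richer (and likely learned rather than hand-written), but even a crude rule like this cuts across org-chart lines the way the text describes.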

Under the Hood

Three data streams make this work.

First, we tap into Microsoft Graph APIs to pull transaction-level Copilot data. Not just “did they log in” but what did they actually do? Which apps? What kinds of queries? How often, and for how long? This creates a behavioral fingerprint for every user that’s far richer than anything native reporting offers.
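As a rough illustration of the “behavioral fingerprint” step, the sketch below rolls transaction-level events into one profile per user. The event fields here are stand-ins, not the actual Microsoft Graph schema for Copilot usage data, and the real pipeline would pull these records from Graph API endpoints rather than an in-memory list.

```python
from collections import defaultdict

# Hypothetical event shape for illustration; not the real Graph schema.

def build_fingerprints(events):
    """Roll transaction-level events into one behavioral profile per user."""
    profiles = defaultdict(lambda: {"sessions": 0, "minutes": 0.0, "apps": set()})
    for e in events:
        p = profiles[e["user"]]
        p["sessions"] += 1
        p["minutes"] += e["duration_min"]
        p["apps"].add(e["app"])
    return dict(profiles)

events = [
    {"user": "ana", "app": "Word", "duration_min": 5.0},
    {"user": "ana", "app": "Excel", "duration_min": 12.5},
    {"user": "raj", "app": "Teams", "duration_min": 2.0},
]
fp = build_fingerprints(events)
print(fp["ana"]["sessions"], sorted(fp["ana"]["apps"]))
```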

Second, we connect to Workday (or whatever HR system you’re running) to layer in organizational context. Now that behavioral data has meaning—you know someone’s role, their team, how long they’ve been around, where they sit in the hierarchy.
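The enrichment step amounts to a join between the behavioral profiles and an HR roster. A minimal sketch, assuming hypothetical field names (a real integration would pull role and team data from Workday’s APIs):

```python
# Illustrative join of usage profiles with HR context; field names are
# assumptions, not Workday's actual data model.

def enrich(profiles: dict, roster: dict) -> dict:
    """Attach role/team context to each user's behavioral profile."""
    enriched = {}
    for user, profile in profiles.items():
        hr = roster.get(user, {"role": "unknown", "team": "unknown"})
        enriched[user] = {**profile, **hr}
    return enriched

profiles = {"ana": {"sessions": 14}}
roster = {"ana": {"role": "Financial Analyst", "team": "FP&A"}}
print(enrich(profiles, roster)["ana"])
```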

Third, we track every enablement activity. Every training session, every office hour, every tutorial and targeted outreach. This closes the loop—you can finally see which interventions actually move the needle and which ones are just checking boxes.

Put it together and you can answer questions that were previously unanswerable. Which job archetypes get the most value from Copilot? Who are the hidden champions whose habits could be taught to others? Is that expensive training program you ran last month actually working?
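Once the three streams are combined, questions like “which archetypes get the most value” reduce to grouping and aggregating. A sketch, with a made-up “active user” threshold purely for illustration:

```python
from statistics import mean

# Illustrative aggregation; the active_threshold value is invented.

def adoption_by_archetype(users, active_threshold=10):
    """Per archetype: average weekly sessions and share of active users."""
    by_arch = {}
    for u in users:
        by_arch.setdefault(u["archetype"], []).append(u["weekly_sessions"])
    return {
        arch: {
            "avg_sessions": mean(sessions),
            "active_share": sum(s >= active_threshold for s in sessions) / len(sessions),
        }
        for arch, sessions in by_arch.items()
    }

users = [
    {"archetype": "The Researcher", "weekly_sessions": 22},
    {"archetype": "The Researcher", "weekly_sessions": 3},
    {"archetype": "The Strategist", "weekly_sessions": 1},
]
print(adoption_by_archetype(users))
```

The same grouping applied to enablement events, rather than sessions, is what lets you compare usage before and after a training intervention.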

What We Actually Did

Data’s only useful if you do something with it. Here’s how this played out.

Finding the champions: The platform surfaced users who’d cracked the code—people who’d figured out, mostly on their own, how to make Copilot genuinely useful for their work. Interestingly, these weren’t always the tech-savvy folks you’d expect. Some of the best champions were in corners of the org that nobody was paying attention to.

Documenting what worked: We sat down with these champions and extracted their playbooks. Not generic “here’s how to use Copilot” stuff—specific workflows, specific prompts, specific use cases tailored to how each archetype actually does their job. Tribal knowledge, basically, but written down and made teachable.

Targeted training: This is where it gets good. Instead of blasting the same training to everyone, we matched content to archetypes. Researchers got different material than Strategists. Teams that were struggling got hands-on help; teams that were already performing got advanced techniques. Every training dollar went where it would actually matter.

Measuring and adjusting: Because we could see what was working, we could iterate fast. Double down on interventions that drove results. Kill the ones that didn’t. The organization built a muscle for driving AI adoption, not just a one-time program.

The Results

Ninety days in, active Copilot utilization was up 44%. Not just logins; real, sustained, high-value usage.

Six months in, the ROI on the analytics platform alone exceeded 20x. And the CFO finally had an answer to that question: concrete evidence, defensible numbers, clear line of sight from investment to productivity gain.

Some other outcomes worth noting:

We analyzed 50 million Copilot interactions in the first 30 days.

That’s not a typo. Fifty million. For the first time, leadership had a genuine single source of truth about how AI was being used across the enterprise.

Reporting overhead dropped by 95%. All that analyst time previously spent wrestling with CSV exports and building spreadsheets? Gone. Automated dashboards handled it.

But the biggest win was strategic. Copilot stopped being a cost center that required faith to justify. It became a measurable asset with clear, role-specific value drivers. When leadership evaluates the next AI investment, they’ve got a framework for understanding whether it’ll actually deliver.

The Bigger Picture

This story plays out again and again across enterprises adopting AI. Big investment in powerful technology. Disappointing adoption. Lots of hand-waving about “change management” without any real insight into what’s going wrong.

The technology isn’t the problem. Copilot works. So do most of these enterprise AI tools. The problem is treating deployment like the finish line when it’s actually the starting gun.

Vendor dashboards are designed to show that software is being used. They’re not designed to help you understand how it’s being used, or to connect usage patterns to business value. That gap—between “deployed” and “delivering”—is where most AI investments go to die.

Closing that gap takes more than dashboards. It takes understanding how different roles actually work, identifying which workflows benefit from AI and which don’t, and building the organizational capacity to drive real behavioral change. Not glamorous work. But it’s the difference between AI as an expense line and AI as a competitive weapon.

For this energy company, the approach turned a stalled investment into a success story. The question for everyone else is simpler: how much value is sitting trapped in your own AI deployments?

About TechWish

TechWish is a Microsoft Solutions Partner focused on AI adoption and workforce intelligence. Our AI Adoption Intelligence Platform helps large enterprises get past surface-level metrics to understand how AI actually creates business value. We work with Fortune 500 companies across energy, financial services, healthcare, and manufacturing to turn AI investments into measurable productivity gains.

Interested in unlocking the full value of your AI investments?

Let’s start the conversation.



Get in touch

Harika Vangeti

Principal Data Engineer/Architect, TechWish